Trade Flow


Multidimensional Knowledge Graph Embeddings for International Trade Flow Analysis

Nandini, Durgesh, Bloethner, Simon, Schoenfeld, Mirco, Larch, Mario

arXiv.org Artificial Intelligence

Understanding the complex dynamics of high-dimensional, contingent, and strongly nonlinear economic data, often shaped by multiplicative processes, poses significant challenges for traditional regression methods, which have limited capacity to capture the structural changes such data feature. To address this, we propose leveraging the potential of knowledge graph embeddings for economic trade data, in particular to predict international trade relationships. We implement KonecoKG, a knowledge graph representation of economic trade data with multidimensional relationships, using SDM-RDFizer, and transform the relationships into knowledge graph embeddings using AmpliGraph.
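The embedding step described above can be illustrated with a minimal TransE-style sketch in plain NumPy. The triples, dimensionality, and training step below are simplified illustrations and do not reproduce the actual KonecoKG schema or the AmpliGraph pipeline.

```python
import numpy as np

# Toy trade triples (head, relation, tail); the entity and relation names
# are illustrative and do not come from the actual KonecoKG data.
triples = [
    ("Germany", "exports_machinery_to", "France"),
    ("France", "exports_wine_to", "Germany"),
    ("Japan", "exports_machinery_to", "USA"),
]

entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
ent_idx = {e: i for i, e in enumerate(entities)}
rel_idx = {r: i for i, r in enumerate(relations)}

rng = np.random.default_rng(0)
dim = 8
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    # TransE scoring: plausible triples have h + r close to t,
    # so higher (less negative) scores mean more plausible links.
    return -np.linalg.norm(E[ent_idx[h]] + R[rel_idx[r]] - E[ent_idx[t]])

def sgd_step(h, r, t, t_neg, lr=0.01, margin=1.0):
    # One margin-based update against a corrupted (negative) tail.
    pos = E[ent_idx[h]] + R[rel_idx[r]] - E[ent_idx[t]]
    neg = E[ent_idx[h]] + R[rel_idx[r]] - E[ent_idx[t_neg]]
    loss = margin + np.linalg.norm(pos) - np.linalg.norm(neg)
    if loss > 0:  # margin violated: pull the true tail closer, push the fake one away
        E[ent_idx[t]] += lr * pos / (np.linalg.norm(pos) + 1e-9)
        E[ent_idx[t_neg]] -= lr * neg / (np.linalg.norm(neg) + 1e-9)
    return max(loss, 0.0)
```

After enough such updates over the full triple set, the score of an unseen pair such as (Japan, exports_machinery_to, France) can be read as a link-prediction signal for a trade relationship.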


Accurate prediction of international trade flows: Leveraging knowledge graphs and their embeddings

Rincon-Yanez, Diego, Ounoughi, Chahinez, Sellami, Bassem, Kalvet, Tarmo, Tiits, Marek, Senatore, Sabrina, Yahia, Sadok Ben

arXiv.org Artificial Intelligence

As a result, KR is critical to offering a simple strategy for defining relevant and contextual information within a finite number of facts from a specific domain of interest; these facts are referred to as a knowledge base (KB). In recent years, the Knowledge Graph (KG), as a form of KR, has gained attention because it provides a contextual, natural, and human-like way of representing knowledge in specific domains and common sense. A KG is formed of statements called triples of the form T = (h, r, t), where h (head) and t (tail) represent real-world entities and r (relation) is the connection between them. Internet organizations like Google, Wikipedia, and Facebook have found in KGs a simple but powerful unified tool to describe their multi-structured and multi-dimensional knowledge bases, capturing user data and transforming it into vast KBs [3]. The KG approach is particularly relevant to studying international trade, a significant cornerstone of economic and social development in the globalized economy [4, 5]. International trade is complex and interconnected, with multiple entities (commodities, companies, and countries) interacting in multiple ways [6]; the KG approach helps make those complex interactions understandable in a structured and intuitive way. In international economics, the gravity model, a fundamental part of current methodology, is widely used to predict trade relations between entities based on factors such as size (GDP, population) and distance, among others [7, 8, 9].
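The gravity model mentioned above predicts bilateral trade as increasing in the partners' economic sizes and decreasing in their distance. A minimal sketch, with the constant and exponents assumed for illustration rather than estimated:

```python
# Classic gravity prediction: trade_ij = G * GDP_i**a * GDP_j**b / dist_ij**c.
# The constant G and exponents a, b, c here are assumed values for
# illustration, not estimates from data.
def gravity_trade(gdp_i, gdp_j, dist_ij, G=1.0, a=1.0, b=1.0, c=1.0):
    return G * (gdp_i ** a) * (gdp_j ** b) / (dist_ij ** c)

# With c = 1, doubling the distance halves the predicted flow.
near = gravity_trade(4.2e12, 2.9e12, 500.0)
far = gravity_trade(4.2e12, 2.9e12, 1000.0)
```

Taking logarithms turns this multiplicative form into a linear equation, which is why gravity parameters are typically estimated by log-linear regression.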


A New Approach to Overcoming Zero Trade in Gravity Models to Avoid Indefinite Values in Linear Logarithmic Equations and Parameter Verification Using Machine Learning

Abdullah, Mikrajuddin

arXiv.org Artificial Intelligence

The presence of a high number of zero trade flows continues to pose a challenge in identifying gravity parameters to explain international trade using the gravity model. Linear regression with a log-linear equation encounters undefined values when taking the logarithm of zero trade. Although several approaches to solving this problem have been proposed, the majority of them are no longer based on linear regression, making the process of finding solutions more complex. In this work, we suggest a two-step technique for determining the gravity parameters: first, perform linear regression locally to establish dummy values that substitute for zero trade flows, and then estimate the gravity parameters. Iterative techniques are used to determine the optimal parameters. Machine learning is used to test the estimated parameters by analyzing their positions in the clusters. We calculated international trade figures for 2004, 2009, 2014, and 2019. We examine only the classic gravity equation and find that the powers of GDP and distance fall in the same cluster and are both roughly one. The strategy presented here can be applied to other problems involving log-linear regression.
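The two-step idea can be sketched as follows. This is a simplified stand-in, not the paper's procedure: an ordinary least-squares fit on the non-zero flows replaces the local regression of step one, and synthetic data replaces real trade figures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bilateral data: log GDP products, log distances, and trade
# flows where 20% of observations are zero (so log(trade) is undefined).
n = 200
log_gdp = rng.uniform(24, 30, size=n)   # log(GDP_i * GDP_j)
log_dist = rng.uniform(5, 9, size=n)    # log distance
true_flow = np.exp(1.0 * log_gdp - 1.0 * log_dist - 10.0)
trade = np.where(rng.random(n) < 0.2, 0.0, true_flow)

# Step 1 (simplified stand-in for the paper's local regression): fit the
# log-linear gravity model on non-zero flows, then use its fitted values
# as dummy substitutes for the zero observations.
nz = trade > 0
X = np.column_stack([np.ones(n), log_gdp, log_dist])
coef_nz, *_ = np.linalg.lstsq(X[nz], np.log(trade[nz]), rcond=None)
filled = trade.copy()
filled[~nz] = np.exp(X[~nz] @ coef_nz)

# Step 2: re-estimate the gravity parameters on the completed sample.
coef, *_ = np.linalg.lstsq(X, np.log(filled), rcond=None)
intercept, beta_gdp, beta_dist = coef
```

On this noise-free synthetic sample the recovered exponents are close to the generating values (+1 for GDP, -1 for distance), mirroring the paper's finding that both powers are roughly one in magnitude.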


Mapping Global Value Chains at the Product Level

Karbevska, Lea, Hidalgo, César A.

arXiv.org Artificial Intelligence

Value chain data is crucial to navigate economic disruptions, such as those caused by the COVID-19 pandemic and the war in Ukraine. Yet, despite its importance, publicly available value chain datasets, such as the ``World Input-Output Database'', ``Inter-Country Input-Output Tables'', ``EXIOBASE'' or the ``EORA'', lack detailed information about products (e.g. Radio Receivers, Telephones, Electrical Capacitors, LCDs, etc.) and rely instead on more aggregate industrial sectors (e.g. Electrical Equipment, Telecommunications). Here, we introduce a method based on machine learning and trade theory to infer product-level value chain relationships from fine-grained international trade data. We apply our method to data summarizing the exports and imports of 300+ world regions (e.g. states in the U.S., prefectures in Japan, etc.) and 1200+ products to infer the value chain information implicit in their trade patterns. Furthermore, we use proportional allocation to assign trade flows between regions and countries. This work provides an approximate method to map value chain data at the product level together with the associated trade flows, which should be of interest to people working in logistics, trade, and sustainable development.
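The proportional allocation step mentioned above can be sketched as follows; the region names and export values are hypothetical, and the rule shown (splitting a national flow in proportion to each region's exports of the product) is a generic version of the idea rather than the paper's exact implementation.

```python
# Proportional allocation: split a national trade flow across subnational
# regions in proportion to each region's exports of the product.
def allocate_flow(total_flow, region_exports):
    total = sum(region_exports.values())
    return {r: total_flow * v / total for r, v in region_exports.items()}

# Hypothetical example: a 1000-unit national flow split across three states.
shares = allocate_flow(
    1000.0,
    {"California": 600.0, "Texas": 300.0, "Ohio": 100.0},
)
```

By construction the allocated shares sum back to the national flow, so the disaggregation preserves the country-level totals.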


DP World targets 700 technology staff in India by mid-2023 - Express Computer

#artificialintelligence

DP World is continuing its rapid growth in India's technology market, with the launch of its latest innovation centre in Gurugram, which will soon host 240 staff working on critical solutions for global supply chains. DP World's presence has grown rapidly to match its ambitions in the digital trade sphere. The company had just 50 employees in India working on technology solutions at the beginning of 2021. This number has already grown to more than 450 with the opening of three centres this year. As this exponential growth continues, employee headcount is expected to reach as many as 700 by the middle of next year.


Machine learning is tearing down language barriers. What does this mean for trade?

#artificialintelligence

We all know what "intelligence" means and what "artificial" means; yet, put them together into "artificial intelligence" and we get confusion, trepidation or maybe even laughter. The phrase "artificial intelligence" may not ring the same bells for everyone. One form of artificial intelligence that should be ringing everyone's bells is called machine learning. Machine learning is transforming our world faster than most realize; in particular, it is transforming trade. Machine learning represents a radical change in the way computers "think".


Big Data in economics

#artificialintelligence

Big Data refers to data sets of much larger size, higher frequency, and often more personalized information. Examples include data collected by smart sensors in homes or aggregation of tweets on Twitter. In small data sets, traditional econometric methods tend to outperform more complex techniques. In large data sets, however, machine learning methods shine. New analytic approaches are needed to make the most of Big Data in economics. Researchers and policymakers should thus pay close attention to recent developments in machine learning techniques if they want to fully take advantage of these new sources of Big Data. Complex data are now available, characterized by large volume, fast velocity, diverse varieties, and the ability to link many data sets together. Powerful new analytic techniques derived from machine learning are increasingly part of the mainstream econometric toolbox. Big Data allows for better prediction of economic phenomena and improves causal inference. Machine learning techniques allow researchers to create simple models that describe very large, complex data sets.

